Patients care about what their teeth will look like after orthodontic treatment. Orthodontists usually describe the expected tooth movement based on the patient's original smile images, which is often unconvincing. The recent progress of deep generative models changes this situation: such models can visualize the outcome of orthodontic treatment and help patients foresee their future teeth and facial appearance. While previous studies mainly focus on 2D or 3D virtual treatment outcome (VTO) at the profile level, the problem of simulating treatment outcomes in a frontal facial image is poorly explored. In this paper, we build an efficient and accurate system for simulating virtual teeth-alignment effects in a frontal facial image. Our system takes as input a frontal face image of a patient with visible malpositioned teeth and the patient's 3D scanned teeth model, and progressively generates visualizations of the patient's teeth for the specific orthodontic planning steps prescribed by the doctor (i.e., the specified translations and rotations of each individual tooth). We design a multi-modal encoder-decoder generative model to synthesize identity-preserving frontal facial images with aligned teeth. In addition, the color information of the original image is used to refine the generated orthodontic outcomes, making the results more natural. We conduct extensive qualitative and clinical experiments, as well as a pilot study, to validate our method.
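The abstract does not detail the architecture, but the multi-modal encoder-decoder idea can be sketched roughly as follows; every layer size, the two-branch fusion, and the use of a rendered teeth map as the geometry input are illustrative assumptions, not the paper's actual design:

```python
# A minimal sketch of a multi-modal encoder-decoder of the kind described above
# (hypothetical layer sizes and fusion scheme).
import torch
import torch.nn as nn

class TeethAlignmentGenerator(nn.Module):
    def __init__(self, img_channels=3, geom_channels=1, feat=64):
        super().__init__()
        # Image branch: encodes the patient's frontal face photo.
        self.img_enc = nn.Sequential(
            nn.Conv2d(img_channels, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Geometry branch: encodes a rendering of the planned teeth positions
        # (e.g., a depth or silhouette map of the 3D scan after the planned
        # per-tooth translations/rotations are applied -- an assumption here).
        self.geom_enc = nn.Sequential(
            nn.Conv2d(geom_channels, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: fuses both modalities and synthesizes the edited image.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat * 4, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, img_channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, face_img, teeth_render):
        fused = torch.cat([self.img_enc(face_img), self.geom_enc(teeth_render)], dim=1)
        return self.dec(fused)

# Example: one 256x256 face photo plus a rendered map of the planned teeth.
gen = TeethAlignmentGenerator()
out = gen(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```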
Recent advances in generative adversarial networks (GANs) have demonstrated the capability to generate stunning photo-realistic portrait images. While some prior works have applied such image GANs to unconditional 2D portrait video generation and static 3D portrait synthesis, few works have successfully extended GANs to generate 3D-aware portrait videos. In this work, we propose PV3D, the first generative framework that can synthesize multi-view consistent portrait videos. Specifically, our method extends the recent static 3D-aware image GAN to the video domain by generalizing the 3D implicit neural representation to model the spatio-temporal space. To introduce motion dynamics into the generation process, we develop a motion generator by stacking multiple motion layers that generate motion features via modulated convolution. To alleviate motion ambiguities caused by camera/human motions, we propose a simple yet effective camera condition strategy for PV3D, enabling both temporally and multi-view consistent video generation. Moreover, PV3D introduces two discriminators that regularize the spatial and temporal domains to ensure the plausibility of the generated portrait videos. Together, these designs enable PV3D to generate 3D-aware, motion-plausible portrait videos with high-quality appearance and geometry, significantly outperforming prior works. As a result, PV3D is able to support many downstream applications such as animating static portraits and view-consistent video motion editing. Code and models will be released at https://showlab.github.io/pv3d.
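The motion-layer design is not specified beyond "modulated convolution", but a StyleGAN2-style modulated layer driven by a per-frame motion code might look like the following sketch; the modulation/demodulation scheme and all sizes are assumptions:

```python
# A rough sketch of one "motion layer": a convolution whose weights are
# modulated by a motion code (StyleGAN2-style; the exact PV3D layer may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedMotionLayer(nn.Module):
    def __init__(self, in_ch, out_ch, motion_dim, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)
        self.affine = nn.Linear(motion_dim, in_ch)  # motion code -> per-channel scales
        self.pad = k // 2

    def forward(self, x, motion_code):
        b, in_ch, h, w = x.shape
        s = self.affine(motion_code).view(b, 1, in_ch, 1, 1)   # (B,1,Cin,1,1)
        w_mod = self.weight.unsqueeze(0) * s                   # modulate per sample
        # Demodulate to keep feature magnitudes stable.
        d = torch.rsqrt((w_mod ** 2).sum(dim=(2, 3, 4), keepdim=True) + 1e-8)
        w_mod = (w_mod * d).view(-1, in_ch, *self.weight.shape[2:])
        x = x.view(1, b * in_ch, h, w)                         # fold batch into groups
        out = F.conv2d(x, w_mod, padding=self.pad, groups=b)
        return out.view(b, -1, h, w)

# Stacking such layers, each driven by a temporal code, yields motion features:
layer = ModulatedMotionLayer(in_ch=64, out_ch=64, motion_dim=128)
feat = layer(torch.randn(2, 64, 16, 16), torch.randn(2, 128))
print(feat.shape)  # torch.Size([2, 64, 16, 16])
```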
In this paper, we study the problem of visual grounding by considering both phrase extraction and grounding (PEG). In contrast to the previous phrase-known-at-test setting, PEG requires a model to extract phrases from text and locate objects in images simultaneously, which is a more practical setting in real applications. As phrase extraction can be regarded as a $1$D text segmentation problem, we formulate PEG as a dual detection problem and propose a novel DQ-DETR model, which introduces dual queries to probe different features from image and text for object prediction and phrase mask prediction. Each pair of dual queries is designed to have a shared positional part but different content parts. Such a design effectively alleviates the difficulty of modality alignment between image and text (in contrast to a single-query design) and empowers the Transformer decoder to leverage phrase-mask-guided attention to improve performance. To evaluate the performance of PEG, we also propose a new metric, CMAP (cross-modal average precision), analogous to the AP metric in object detection. The new metric overcomes the ambiguity of Recall@1 in many-box-to-one-phrase cases in phrase grounding. As a result, our PEG-pretrained DQ-DETR establishes new state-of-the-art results on all visual grounding benchmarks with a ResNet-101 backbone. For example, it achieves $91.04\%$ and $83.51\%$ recall on RefCOCO testA and testB, respectively. Code will be available at \url{https://github.com/IDEA-Research/DQ-DETR}.
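As a rough illustration of the dual-query design, each query pair could share one positional embedding while keeping separate content embeddings for the object branch and the phrase-mask branch; the sketch below is an assumption about the construction, with hypothetical dimensions:

```python
# An illustrative sketch of dual queries: a shared positional part plus two
# distinct content parts (one per prediction task).
import torch
import torch.nn as nn

class DualQueries(nn.Module):
    def __init__(self, num_queries=100, d_model=256):
        super().__init__()
        self.pos = nn.Embedding(num_queries, d_model)            # shared positional part
        self.obj_content = nn.Embedding(num_queries, d_model)    # for box prediction
        self.phrase_content = nn.Embedding(num_queries, d_model) # for 1D phrase masks

    def forward(self, batch_size):
        pos = self.pos.weight.unsqueeze(0).expand(batch_size, -1, -1)
        obj_q = pos + self.obj_content.weight.unsqueeze(0)
        phrase_q = pos + self.phrase_content.weight.unsqueeze(0)
        return obj_q, phrase_q  # fed to the decoder's image/text cross-attention

q_obj, q_phrase = DualQueries()(batch_size=2)
print(q_obj.shape, q_phrase.shape)  # torch.Size([2, 100, 256]) each
```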
In this paper, we present a novel multi-attribute face manipulation method based on textual descriptions. Previous text-based image editing methods either require test-time optimization for each individual image or are restricted to single-attribute editing. Extending these methods to multi-attribute face image editing scenarios introduces undesired excessive attribute change, e.g., text-relevant attributes are overly manipulated while text-irrelevant attributes are also changed. To address these challenges and achieve natural editing over multiple face attributes, we propose a new decoupling training scheme in which we use group sampling to draw text segments from the same attribute category, instead of whole complex sentences. Further, to preserve other existing face attributes, we encourage the model to edit the latent code of each attribute separately via an entropy constraint. During the inference phase, our model is able to edit new face images without any test-time optimization, even from complex textual prompts. We show extensive experiments and analysis to demonstrate the efficacy of our method, which generates natural manipulated faces with minimal text-irrelevant attribute editing. Code and pre-trained models will be released.
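The entropy constraint is not spelled out in the abstract; one plausible reading is a penalty on the entropy of edit magnitudes across attribute-specific latent groups, so each text segment concentrates its edit on one attribute. A minimal sketch under that assumption:

```python
# A hedged sketch of an entropy-style constraint: penalize the entropy of the
# distribution of edit magnitudes across attribute-specific latent groups
# (naming and formulation are our assumptions, not the paper's).
import torch

def edit_entropy_loss(latent_offsets):
    """latent_offsets: (batch, num_attributes, latent_dim) predicted edits."""
    mag = latent_offsets.norm(dim=-1)                 # (B, A) per-attribute strength
    p = mag / (mag.sum(dim=-1, keepdim=True) + 1e-8)  # normalize to a distribution
    entropy = -(p * (p + 1e-8).log()).sum(dim=-1)     # high if edits are spread out
    return entropy.mean()                             # minimize -> focused edits

offsets = torch.randn(4, 8, 512)   # e.g., 8 attribute groups in a 512-d latent
print(edit_entropy_loss(offsets))  # added to the training objective as a penalty
```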
Recently, a number of task-oriented dialogue (TOD) datasets have been collected via Wizard-of-Oz simulated games. However, the Wizard-of-Oz data are in fact simulated and thus fundamentally different from real-life conversations, which are more noisy and casual. Recently, the SereTOD challenge was organized and released the MobileCS dataset, which consists of real-world dialogue transcripts between real users and customer-service staff from China Mobile. Based on the MobileCS dataset, the SereTOD challenge features two tasks, which evaluate not only the construction of the dialogue system itself but also information extraction from dialogue transcripts, which is crucial for building the knowledge base for TOD. This paper mainly presents a baseline study of the two tasks on the MobileCS dataset. We describe how the two baselines are built, the problems encountered, and the results. We anticipate that the baselines can facilitate exciting future research toward building human-robot dialogue systems for real-life tasks.
Single-view reconstruction of hand-object interaction is challenging due to the severe loss of observations caused by occlusions. This paper proposes a physics-based method to better resolve the ambiguities in reconstruction. It first proposes a force-based dynamics model that not only recovers unobserved contacts but also solves for plausible contact forces. Next, a confidence-based slip-prevention scheme is proposed, which combines both kinematic confidence and contact forces to jointly model static and sliding contact motion. Qualitative and quantitative experiments show that the proposed technique reconstructs physically plausible and more accurate hand-object interactions, and estimates plausible contact forces in real time with a single RGBD sensor.
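The static-versus-sliding decision at the heart of slip prevention can be illustrated with a standard Coulomb friction-cone test gated by kinematic confidence; the thresholds and gating rule below are simplified assumptions, not the paper's exact formulation:

```python
# A simplified illustration: a contact is treated as static when the
# tangential force stays inside the Coulomb friction cone, and the force-based
# test is trusted only when kinematic confidence is high enough.
import numpy as np

def contact_state(force, normal, mu=0.8, kin_conf=1.0, conf_thresh=0.5):
    """force: 3D contact force; normal: unit contact normal."""
    f_n = np.dot(force, normal)                 # normal component magnitude
    f_t = np.linalg.norm(force - f_n * normal)  # tangential component magnitude
    inside_cone = f_t <= mu * max(f_n, 0.0)     # Coulomb condition |f_t| <= mu*f_n
    return "static" if (inside_cone and kin_conf >= conf_thresh) else "sliding"

print(contact_state(np.array([0.1, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # static
print(contact_state(np.array([1.5, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # sliding
```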
Supply-chain platforms (SCPs) provide many raw materials for downstream industries. Compared with traditional e-commerce platforms, data in SCPs are sparser due to limited user interests. To tackle the data-sparsity problem, cross-domain recommendation (CDR) can be applied, which improves recommendation performance in the target domain using source-domain information. However, applying CDR to SCPs directly ignores the hierarchical structure of commodities in SCPs, which degrades recommendation performance. To leverage this feature, in this paper we take the catering platform as an example and propose GReS, a graphical cross-domain recommendation model. The model first constructs a tree-shaped graph to represent the hierarchy of the different dish and ingredient nodes, and then applies our proposed Tree2vec method, which combines GCN and BERT models to embed the graph for recommendation. Experimental results on a commercial dataset show that GReS significantly outperforms state-of-the-art methods in cross-domain recommendation for supply-chain platforms.
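As a toy illustration of the Tree2vec idea, text embeddings of dish and ingredient names (standing in for BERT outputs here) can be propagated over the tree-shaped graph with a GCN layer; the graph, sizes, and aggregation rule below are illustrative assumptions:

```python
# A toy sketch: BERT-style node features propagated over a dish/ingredient tree
# with one GCN layer (symmetric normalization).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetric normalization: D^-1/2 (A + I) D^-1/2
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).rsqrt()
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        return torch.relu(self.lin(a_norm @ x))

# Tree with one dish node (0) and two ingredient children (1, 2).
adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
node_text_emb = torch.randn(3, 768)  # e.g., BERT [CLS] embeddings of node names
emb = GCNLayer(768, 128)(node_text_emb, adj)
print(emb.shape)  # torch.Size([3, 128]) -- node embeddings for recommendation
```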
Most existing slot filling models tend to memorize inherent patterns of entities and their corresponding contexts in the training data. However, these models can lead to system failures or undesirable outputs when exposed to spoken language perturbations or variations in practice. We propose a perturbed semantic structure awareness transferring method for training perturbation-robust slot filling models. Specifically, we introduce two MLM-based training strategies to learn contextual semantic structure and word distribution, respectively, from an unsupervised language perturbation corpus. We then transfer the semantic knowledge learned in the upstream training procedure into the original samples and filter the generated data through consistency processing. These procedures aim to enhance the robustness of slot filling models. Experimental results show that our method consistently outperforms previous basic methods and achieves strong generalization, while preventing the model from memorizing inherent patterns of entities and contexts.
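The abstract does not specify the MLM objectives; a minimal sketch of masked-language-model training on a perturbation corpus, with placeholder vocabulary, mask rate, and encoder, might look like this:

```python
# A minimal MLM-style objective: randomly mask tokens from the corpus and train
# an encoder to recover them (all sizes and the mask rate are placeholders).
import torch
import torch.nn as nn

vocab_size, mask_id, d = 1000, 0, 128
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
embed = nn.Embedding(vocab_size, d)
head = nn.Linear(d, vocab_size)

tokens = torch.randint(1, vocab_size, (8, 32))  # a batch from the corpus
mask = torch.rand(tokens.shape) < 0.15          # mask ~15% of positions
inputs = tokens.masked_fill(mask, mask_id)

logits = head(encoder(embed(inputs)))
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])  # masked positions only
loss.backward()
print(loss.item())
```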
People with blindness and low vision (PBLV) face significant challenges when locating final destinations or targeting specific objects in unfamiliar environments. Moreover, besides initially locating and orienting toward a target object, approaching the final target from one's current position is often frustrating and challenging, especially when one deviates from the initially planned path to avoid obstacles. In this paper, we develop a novel wearable navigation solution that provides real-time guidance for a user to approach a target object of interest efficiently in unfamiliar environments. Our system contains two key visual computing functions: initial target-object localization in 3D and continuous estimation of the user's trajectory, both based on 2D video captured by a low-cost monocular camera mounted in front of the user's chest. These functions enable the system to suggest an initial navigation path, continuously update the path as the user moves, and offer timely recommendations for correcting the user's path. Our experiments demonstrate that the system is able to operate with an error of less than 0.5 meters both outdoors and indoors. The system is entirely vision-based, requires no additional sensors for navigation, and its computation can run on a Jetson processor in the wearable system to facilitate real-time navigation assistance.
Pruning techniques are widely used to compress convolutional neural networks (CNNs) for image classification. However, most pruning methods require a well-trained model to provide useful supporting parameters, such as the ℓ1-norm, BatchNorm values, and gradient information, which may lead to inconsistent filter evaluation if the pre-trained model's parameters are not well optimized. Therefore, we propose a sensitivity-based method that evaluates the importance of each layer by adding extra damage to the original model. Since the accuracy depends on the distribution of parameters across all layers rather than on individual parameters, the sensitivity-based method is robust to parameter updates. That is, we can obtain similar importance evaluations of each convolutional layer for incompletely trained and fully trained models. For VGG-16 on CIFAR-10, even when the original model is trained for only 50 epochs, we obtain the same evaluation of layer importance as when the model is fully trained. We then remove filters from each layer according to the quantified sensitivity. Our sensitivity-based pruning framework is verified to be effective on VGG-16 with CIFAR-10, MNIST, and CIFAR-100, respectively.
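A hedged sketch of the sensitivity probe described above: perturb one convolutional layer at a time and record the accuracy drop as that layer's importance. The noise scale and the stand-in evaluation metric below are illustrative assumptions:

```python
# Perturb each Conv2d layer's weights in isolation and score the layer by the
# resulting accuracy drop (larger drop = more sensitive = more important).
import copy
import torch

@torch.no_grad()
def layer_sensitivity(model, eval_fn, noise_std=0.1):
    """eval_fn(model) -> accuracy; returns {layer_name: accuracy drop}."""
    base_acc = eval_fn(model)
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Conv2d):
            damaged = copy.deepcopy(model)
            w = dict(damaged.named_modules())[name].weight
            w.add_(torch.randn_like(w) * noise_std * w.std())  # add extra damage
            scores[name] = base_acc - eval_fn(damaged)
    return scores

# Demo with a toy model and a stand-in metric (a real eval_fn would measure
# validation accuracy); layers with low sensitivity would lose more filters.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU(), torch.nn.Conv2d(8, 8, 3))
dummy_eval = lambda m: float(-m(torch.randn(4, 3, 32, 32)).abs().mean())
print(layer_sensitivity(model, dummy_eval))
```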